Quantifying Membership Privacy via Information Leakage
Authors
Abstract
Machine learning models are known to memorize the unique properties of individual data points in a training set. This memorization capability can be exploited by several types of attacks to infer information about the training data, most notably, membership inference attacks. In this paper, we propose an approach based on information leakage for guaranteeing membership privacy. Specifically, we use a conditional form of the notion of maximal leakage to quantify the information leaking about individual entries of the dataset, i.e., the entrywise leakage. We apply our privacy analysis to the Private Aggregation of Teacher Ensembles (PATE) framework for privacy-preserving classification of sensitive data, and prove that the entrywise leakage of its aggregation mechanism is Schur-concave when the injected noise has a log-concave probability density. The Schur-concavity implies that increased consensus among the teachers in labeling a query reduces the associated privacy cost. Finally, we derive upper bounds on the entrywise leakage when the aggregation mechanism uses Laplace distributed noise.
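For reference, the (unconditional) maximal leakage from a secret X to a released output Y is commonly defined in the information-theoretic literature as

\mathcal{L}(X \to Y) \;=\; \log \sum_{y \in \mathcal{Y}} \; \max_{x:\, P_X(x) > 0} P_{Y \mid X}(y \mid x),

and the paper works with a conditional, entrywise variant of this quantity; the exact conditional form used by the authors is not reproduced in this excerpt. As an illustrative sketch of the mechanism being analyzed, a standard PATE-style aggregation with Laplace noise labels a query x as

f(x) \;=\; \arg\max_{j} \bigl\{\, n_j(x) + \mathrm{Lap}(1/\gamma) \,\bigr\},

where n_j(x) is the number of teachers voting for class j and \gamma > 0 scales the noise (notation assumed here for illustration, not taken from the paper). Under the Schur-concavity result stated above, vote histograms that are more concentrated on a single class, i.e., higher teacher consensus, incur a lower entrywise leakage for that query.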
Similar resources
Quantifying Location Privacy Leakage from Transaction Prices
Large-scale datasets of consumer behavior might revolutionize the way we gain competitive advantages and increase our knowledge in the respective domains. At the same time, valuable datasets pose potential privacy risks that are difficult to foresee. In this paper we study the impact that the prices from consumers’ purchase histories have on the consumers’ location privacy. We show that using a...
Quantifying Privacy Leakage through Answering Database Queries
We assume a database consists of records of individuals with private or sensitive fields. Queries on the distribution of a sensitive field within a selected population in the database can be submitted to the data center. The answers to the queries leak private information of individuals though no identification information is provided. Inspired by decision theory, we present a quantitative mode...
Quantifying Information Leakage of Randomized Protocols
The quantification of information leakage provides a quantitative evaluation of the security of a system. We propose the usage of Markovian processes to model deterministic and probabilistic systems. By using a methodology generalizing the lattice of information approach, we model refined attackers capable of observing the internal behavior of the system, and quantify the information leakage of su...
Quantifying Information Leakage in Process Calculi
We study two quantitative models of information leakage in the pi-calculus. The first model presupposes an attacker with an essentially unlimited computational power. The resulting notion of absolute leakage, measured in bits, is in agreement with secrecy as defined by Abadi and Gordon: a process has an absolute leakage of zero precisely when it satisfies secrecy. The second model assumes a res...
A Model for Quantifying Information Leakage
We study data privacy in the context of information leakage. As more of our sensitive data gets exposed to merchants, health care providers, employers, social sites and so on, there is a higher chance that an adversary can “connect the dots” and piece together a lot of our information. The more complete the integrated information, the more our privacy is compromised. We present a model that cap...
Journal
Journal title: IEEE Transactions on Information Forensics and Security
Year: 2021
ISSN: 1556-6013, 1556-6021
DOI: https://doi.org/10.1109/tifs.2021.3073804